CAD CAM EDM DRO - Yahoo Group Archive

Re: real time?

Posted by Ted
on 1999-06-29 01:58:58 UTC
From Ted Robbins
rtr@...

----------
> From: Jon Elson <jmelson@...>
> To: CAD_CAM_EDM_DRO@onelist.com
> Subject: Re: [CAD_CAM_EDM_DRO] real time?
> Date: Monday, June 28, 1999 11:27 PM
>
> TADGUNINC@... wrote:
>
> > From: TADGUNINC@...
> >
> > Fortunately I have the support of my girl friend in learning all this
> > new stuff about Linux EMC and cnc stuff, not to mention everyone on
> > the list!!!! So time for another no brainer for you guys...what does
> > real time mean and how do the window based cad programs differ from
> > Linux?
>
> Linux, by itself, is not a real time system, either. The definition of
> real time is that tasks that are necessary can get the CPU time they
> need to run, without any possibility of being interfered with by disk
> accesses, memory paging, other tasks, even processor error handling. A
> secondary aspect is that interrupt latency should be small. The real
> time patches to Linux definitely satisfy both of these needs. The real
> time scheduler parcels out CPU time to Linux only when the real time
> modules don't need it. As long as the real time processes don't hog
> the cpu, you never even know they are there.
>
> Windows-based CAD programs don't do motion control, so they don't
> need to be real-time. There are some motion programs that DO control
> machines, and unless they are supported by real-time extension
> packages, the motion will be choppy, with the possibility of just
> sitting there (with steppers) or leaving the machine moving (with
> servos) for several seconds at a time. VERY bad!
>
> And, the guys at NIST tried out some real-time extensions for Windows
> NT, and the results were that they did work, but the interrupt latency
> was VERY bad. Many times worse than the RT Linux.
>
> Jon
>
>
Jon's description of real time, like his method of getting the count of
an encoder, is accurate and succinct. I'd like to add a wordier real
time commentary to put software speed in a broader perspective, in the
interest of laying the groundwork for a FAQ, if Bill, the list manager,
can be persuaded to take on that editing task.

My daughter, having programmed graphic and virtual reality languages,
defines real time as the ability to keep up with the monitor's demand
for pixels. Machine control applications need to keep up with the
movement of the axes and with whatever other actions the machine may be
required to take. As in the graphics definition of real time, the tasks
required set the speed requirements. But while a single scan of the
monitor may be garbled by a failure to keep up, as Jon pointed out, a
failure to keep up in machine control can be far more serious.

A computer is not inherently a real time device. The way it can be made
to handle real time tasks is to make it run faster than the real time
requirements demand. Remember the definition of internet years. People
who write software for spreadsheets or word processors are not
constrained by real time requirements, so the operating systems under
which they run can safely take time out to perform housekeeping tasks
or whatever other tasks the user or programmer assigns. Real time
programs, or patches on software not intended to operate in real time,
keep the computer from paying attention to anything that would distract
it from the real time job, or allow it to perform those background
tasks only in small snatches of time.
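
Here is a rough sketch in C of that idea. It uses the standard POSIX
real time calls found on an ordinary modern Linux (sched_setscheduler,
mlockall, clock_nanosleep), not the RT Linux patches Jon mentions, so
take it as an illustration of the concept rather than EMC's method; the
exact calls available will depend on your kernel and libc:

    #define _GNU_SOURCE
    #include <sched.h>      /* sched_setscheduler, SCHED_FIFO */
    #include <stdio.h>      /* perror */
    #include <sys/mman.h>   /* mlockall */
    #include <time.h>       /* clock_gettime, clock_nanosleep */

    #define PERIOD_NS 1000000L          /* 1 ms control period */

    static void control_step(void)
    {
        /* the real work would go here: read encoders, plan motion,
           update the step or servo outputs */
    }

    int main(void)
    {
        struct sched_param sp = { .sched_priority = 50 };
        struct timespec next;

        /* Lock every page into RAM so memory paging can never stall
           the control loop. */
        if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0)
            perror("mlockall");

        /* Ask for a fixed real time priority; ordinary tasks then run
           only while we sleep, much like Jon's description of the RT
           scheduler parceling time out to Linux. */
        if (sched_setscheduler(0, SCHED_FIFO, &sp) != 0)
            perror("sched_setscheduler (needs root)");

        clock_gettime(CLOCK_MONOTONIC, &next);
        for (;;) {
            control_step();

            /* Sleep to the next absolute deadline, not for a relative
               delay, so timing errors never accumulate. */
            next.tv_nsec += PERIOD_NS;
            if (next.tv_nsec >= 1000000000L) {
                next.tv_nsec -= 1000000000L;
                next.tv_sec  += 1;
            }
            clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
        }
    }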

This works, but there are faster methods. I will describe them so that
you can consider them when ordinary real time software just isn't fast
enough for your tasks.

The fastest method of accomplishing electronic control tasks is with
hardware designed for the task. In a sense, this is also programming.
When you design a boolean equation with logical Ands, Ors, and Nots,
the building block circuits of computation, it is no less a programming
exercise than writing Fortran or one of the newer languages. It is
accomplished with wires, or traces on a printed circuit board, or
connections within an integrated circuit, rather than with lines of
code.
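
To make that concrete: the direction logic for counting a quadrature
encoder, like the one Jon described, boils down to a couple of boolean
equations. Written in C they look like this (a simplified sketch, not
Jon's actual circuit; the same Ands, Ors, and Nots could just as well
be wired up as gates):

    #include <stdint.h>

    static int32_t count;   /* position, in encoder edges */

    /* Sample channels A and B each tick.  On a change of A, the count
       direction is simply A XOR B -- one boolean equation, whether it
       is realized in logic gates or compiled from this C. */
    void encoder_sample(int a, int b, int a_prev)
    {
        int changed = a ^ a_prev;   /* did channel A toggle? */
        int up      = a ^ b;        /* direction bit */

        if (changed)
            count += up ? 1 : -1;
    }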

Programming definitions, the methods of connection, get pretty fuzzy
when the connections are specified with a Hardware Description
Language, or the software is specified with a wiring patchboard, as in
computers built in the 1950s and before.

When controls are designed this way, latency, the built-in time delay,
is determined by the time it takes for transistors to charge and
discharge like capacitors, and the time it takes for signals to travel
down wires. Increased speed is accomplished by shrinking the size of
components, reducing the distance between them, and making the circuits
themselves work faster by redesigning them.

Software comes in four speed flavors. Interpreted code is the slowest,
but the easiest to debug. Each instruction you, as a programmer, feed
to the computer gets converted to machine language as it is read, and
then performed. The time spent interpreting hurts speed, but it makes
finding your programming errors much faster.
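
The cost is easy to see in miniature. A toy interpreter spends most of
its time deciding what each instruction means before doing the tiny bit
of work itself; a compiler pays that decoding cost once, ahead of time.
A sketch in C, with a made-up four instruction machine:

    #include <stdio.h>

    enum { OP_PUSH, OP_ADD, OP_PRINT, OP_HALT };  /* toy instructions */

    static void run(const int *code)
    {
        int stack[16], sp = 0;

        for (;;) {
            switch (*code++) {        /* decoding: paid on EVERY step */
            case OP_PUSH:  stack[sp++] = *code++;            break;
            case OP_ADD:   sp--; stack[sp - 1] += stack[sp]; break;
            case OP_PRINT: printf("%d\n", stack[sp - 1]);    break;
            case OP_HALT:  return;
            }
        }
    }

    int main(void)
    {
        /* "push 2, push 3, add, print" -- compiled code would be one
           add instruction; the interpreter re-decodes every opcode. */
        int program[] = { OP_PUSH, 2, OP_PUSH, 3,
                          OP_ADD, OP_PRINT, OP_HALT };
        run(program);
        return 0;
    }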

Compiled code is code that a compiler runs through before the program
is ever made to go. The compiler converts everything it can into
machine language before the application starts running; that is, it
compiles the program. The result is much faster code; the cost is
slower debugging.

Luckily, with many computer languages you can get the best of both
worlds. These languages exist in both interpreted versions, which let
you debug programs quickly, and compiled versions, which run much
faster once you have debugged them.

The middle ground between the speed of hardware and the fast, but
sometimes inadequate, speed of compiled real time software is machine
language programming. The true bottom level is not really within reach
of most of us today, because the microprocessor is microprogrammed by
its designers in its own internal language. What that microprogram
implements is what we call machine language today, so we are already a
step removed from true machine language in most machines. In machine
language programming you ignore operating systems, real time patches,
and high level language constructs. Instead, you regard the
microprocessor as a collection of hardware, a collection of
preconnected boolean circuits. You tell the machine what to do at the
hardware level.
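
On the PCs most of us use to drive our machines, "the hardware level"
often just means an I/O port. A sketch of the idea on an x86 Linux box,
pulsing a step line on a parallel port (the port address 0x378 and the
timings are assumptions; yours may differ):

    #include <stdio.h>     /* perror */
    #include <sys/io.h>    /* ioperm, outb -- x86 Linux, needs root */
    #include <unistd.h>    /* usleep */

    #define LPT_DATA 0x378 /* assumed parallel port data register */

    int main(void)
    {
        int i;

        /* Ask the kernel for direct access to the port; after that,
           each outb() is a single machine instruction talking straight
           to the hardware, with no operating system in the loop. */
        if (ioperm(LPT_DATA, 1, 1) != 0) {
            perror("ioperm (needs root)");
            return 1;
        }

        for (i = 0; i < 100; i++) {  /* 100 step pulses */
            outb(0x01, LPT_DATA);    /* step line high */
            usleep(5);               /* pulse width */
            outb(0x00, LPT_DATA);    /* step line low */
            usleep(495);             /* rest of the period */
        }
        return 0;
    }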

The problem with this method is the inefficiency of programming when you
must write dozens of lines of machine code to do what a high level language
does with a single short statement. There are a couple of ways around this
disadvantage.

The most common is to write the code in a high level language and
benchmark it. That is, find out what part of the code is taking up the
most time when the task can least afford it. Then you rewrite that
small part of the code in machine language, leaving the rest of it in
the high level language, which is more efficient with programmers' time
and less efficient with machine time. C++, a high level language, lets
you insert machine language casually, right in the middle of its higher
level code.
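
Here is what that looks like with GCC's inline assembly extension on
x86 (the same syntax works in its C++ compiler); the hot spot here is
trivial, just to show the mechanism:

    #include <stdint.h>
    #include <stdio.h>

    /* The high level version of a (pretend) hot spot... */
    static uint32_t scale_c(uint32_t x)
    {
        return x * 8;
    }

    /* ...and the same operation hand-written in machine language,
       dropped into the middle of otherwise high level code. */
    static uint32_t scale_asm(uint32_t x)
    {
        __asm__ ("shll $3, %0"   /* x <<= 3, i.e. x *= 8 */
                 : "+r" (x));    /* x in a register, read and written */
        return x;
    }

    int main(void)
    {
        printf("%u %u\n", scale_c(5), scale_asm(5));  /* prints 40 40 */
        return 0;
    }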

The other method is to write in languages which are designed to give
you the best of both worlds. My favorite is Forth. It mixes
interpretation, compilation, machine language, and the flavor of
microcoding to run nearly as fast as a machine language program, and it
is reasonably fast to program in.

The method you probably can't use is to microcode microprocessors
themselves. However, it is getting easier to accomplish this, because
microprocessors are becoming just software descriptions of intellectual
property, programs themselves. You could modify their microcode if you
wanted to.

Fortunately, that is not necessary. Real hotshots, like Charles Moore, the
inventor of Forth, are hired by microprocessor makers to write microcode
for narrow ranges of applications. If you are willing to forego making
chips for a long time, you can move up the learning curve on one of these
chips and build a really fast machine control without having to wire it by
hand.

Speaking of wiring by hand, the original numerical controls were made
that way. They were not called CNC, just NC. It's nice that we don't
have to make numerical controls that way any more.


Discussion Thread

TADGUNINC@x... 1999-06-28 20:07:53 UTC Re: real time?
Jon Elson 1999-06-28 23:27:43 UTC Re: real time?
Ted 1999-06-29 01:58:58 UTC Re: real time?
TADGUNINC@x... 1999-06-29 07:50:15 UTC Re: real time?
WAnliker@x... 1999-06-29 10:24:43 UTC Re: real time?
Jon Elson 1999-06-29 12:32:18 UTC Re: real time?
Tim Goldstein 1999-06-29 12:59:03 UTC Re: real time?
Ted 1999-06-29 13:23:33 UTC Re: real time?