If you have, say, a 320x200 screen that's refreshing at 60Hz, you need
to time the light-pen's pulse to within 1/(60*320*200) == 1/3_840_000 of
a second -- about 260 nanoseconds -- to get within-one-pixel positioning
information. That's not child's play, but it's not rocket science
either. (It's actually a little bit worse than this because of the
retraces. But I'm not sure how fast the old CGA screens refreshed,
either; I suspect it might have been slower than this.)
But if you have, say, a 1280x1024 screen that's refreshing at 85Hz, you
need to time the light-pen's pulse to within 1/(1280*1024*85) =
1/111_411_200 of a second. That's about nine nanoseconds. This is a
little tougher, or so I've heard, anyway. It seems it would be
especially tough if you're transmitting your timing signals over USB,
which transmits a bit every 83 nanoseconds or so, and thus a byte every
666 ns or so.
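The arithmetic above can be sketched in a few lines of Python; this is
just the naive seconds-per-pixel calculation, ignoring horizontal and
vertical retrace, with the ~83 ns USB figure coming from full-speed
USB's 12 Mbit/s signaling rate:

```python
def pixel_dwell_ns(width, height, refresh_hz):
    """Nanoseconds the beam spends on one pixel, ignoring retraces."""
    return 1e9 / (width * height * refresh_hz)

# 320x200 at 60 Hz: roughly 260 ns per pixel
print(round(pixel_dwell_ns(320, 200, 60)))    # -> 260

# 1280x1024 at 85 Hz: roughly 9 ns per pixel
print(round(pixel_dwell_ns(1280, 1024, 85)))  # -> 9

# Full-speed USB runs at 12 Mbit/s, so one bit takes about 83 ns --
# nearly ten pixel-times at the higher resolution.
usb_bit_ns = 1e9 / 12e6
print(round(usb_bit_ns))                      # -> 83
```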
I'm a little bit puzzled about why light pens need to be calibrated;
doesn't the video card *know* when it tells the CRT to sweep the beam
past the pixel at (327, 445)? Or is it a matter of compensating for
delays in the light pen itself? Maybe this would be obvious to me if I
knew more about light pens and CRT controllers. Or maybe it's just a
deficiency of FTG Data's designs.
If I get a touch screen, it damn well better be horizontal, btw.
I think this is an exceedingly cool idea, nevertheless.
-- 
<firstname.lastname@example.org> Kragen Sitaker <http://www.pobox.com/~kragen/>
Sun Oct 17 1999
23 days until the Internet stock bubble bursts on Monday, 1999-11-08.
<URL:http://www.pobox.com/~kragen/bubble.html>