OpenCV vision with EMC/linuxcnc
The feeders are all mounted such that the parts are presented to the pickup head along the machine's Y axis of travel. So for each feeder, I move the camera over the part and adjust the Y (controlled by the machine movement) and the X of the feeder (controlled by the feeder itself) until the part is centered. This information is then saved for each feeder in use. So again, the output to the control would just need to be X,Y position data. BTW, that is all done manually - go figure.
The trays - which would lend themselves better to the DIY community IMHO - just get stuck to a fixed bed in the PCB area. In my case I use double-sided tape to hold them down. Keep in mind, anything can be a tray, including short pieces of tape. To set up the tray position, I just jog the same fiducial camera over to the center of the first part to find the reference for the "tray". This plus the tray dimensions are saved for each tray.
Seems like the vision system really just needs to be able to spit out position data for whatever it is looking at. I would guess there needs to be a signal from the control to say "take a picture", then the VS will send back data in some form such as X,Y,Theta. From there LinuxCNC will need to use this depending on what task needs to be done.
I am not at all sure here, but this may be a job for a "somethingKIN"?
Yes, I did do one project that required working very close to the limit. For that project, I did manually use the limit switch for (effectively) homing.
I must emphasize that I am a newbie with no real experience. I just think it is a good idea to have a feedback system that tells me that the machine is approximately where it is supposed to be, and stepper-based systems do not intrinsically have such a system.
The single biggest help anyone could give me right now would be a pointer as to what to read first in the source code, even better would be a template/starting point/some instruction for creating HAL objects.
This looks like a good starting point:
I've made a start on using Python for the userspace HAL module. So far OpenCV is happy alongside the HAL; it appears to be capturing images in the loop in Python. I say this because no errors are reported and CPU load increases, however the highgui window to display the frame will not show. Is this a limitation of userspace HAL components? I'm directly testing Python OpenCV routines side by side, both in and out of the LinuxCNC HAL. What works and displays an image outside of LinuxCNC does not work inside the Python HAL module.
Here is the Python script (sorry for the noise, it's only short) - I named this hal1:
#!/usr/bin/python
import sys          # needed for sys.exit below
import hal, time
# import the necessary things for OpenCV
from opencv import cv
from opencv import highgui

fiducialCam = 0

h = hal.component("hal1")
h.newpin("in", hal.HAL_FLOAT, hal.HAL_IN)
h.newpin("ok", hal.HAL_FLOAT, hal.HAL_OUT)
h.newpin("x", hal.HAL_FLOAT, hal.HAL_OUT)
h.newpin("y", hal.HAL_FLOAT, hal.HAL_OUT)
h.newpin("z", hal.HAL_FLOAT, hal.HAL_OUT)
h.newpin("theta", hal.HAL_FLOAT, hal.HAL_OUT)
h.ready()

try:
    highgui.cvNamedWindow ('Camera', highgui.CV_WINDOW_AUTOSIZE)
    # move the new window to a better place
    highgui.cvMoveWindow ('Camera', 10, 10)
    capture = highgui.cvCreateCameraCapture (fiducialCam)
    if not capture:
        print "Error opening capture device"
        sys.exit (1)
    while 1:
        #time.sleep(1)
        # there is no "out" pin on this component - echo onto "ok" instead
        h['ok'] = h['in']
        # capture the current image
        frame = highgui.cvQueryFrame (capture)
        if frame is None:
            # no image captured... end the processing
            break
        # display the frame
        highgui.cvShowImage ('Camera', frame)
        # highgui needs its event loop pumped or the window never repaints -
        # this may well be why the window was not showing
        highgui.cvWaitKey (1)
        #print "Does this work?"
except KeyboardInterrupt:
    raise SystemExit
If you're going to try this out, on Ubuntu 10.04 with LinuxCNC already installed, the dependencies are satisfied with the following:
sudo apt-get install libcv4 libcvaux4 libcvaux-dev libhighgui4 libhighgui-dev python-opencv opencv-doc libcv-dev
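For completeness, once the script is executable it can be started from a HAL file; the pin wiring below is only an illustration of how the component's outputs might be connected, not a working config:

```
# load the userspace component and wait for it to become ready
loadusr -W ./hal1
# then its pins (hal1.x, hal1.y, hal1.theta, ...) can be netted
# to whatever is going to consume the vision data
```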
Do you know of a way to display an image inside part of the Axis GUI instead? Would that be a job for pyVCP or the other VCP?
Any ideas greatly appreciated,
-- edited code, grr - indentation... mumble mumble
The alternative to "Userspace" is "Realtime". I don't think you want to be doing the Vision in realtime...
I'm a c++ man at heart. Would I be better off working in my native language with a hal component that's not userspace?
I'm not loving the python-ness with opencv - c++ would be infinitely preferable!
Don't suppose you've got a link to a starting point for c++ userspace hal modules?
-- edit - found a sensible place to start - www.linuxcnc.org/docs/2.4/html/hal_comp.html#r1_13