How to use the GPU to accelerate the recognition process
jean phil from Canada [3 posts]
10 years ago
Hi there. Does someone know a way to use the GPU (video card) to accelerate RoboRealm? I think there is a way with CUDA, but before trying it I wanted to ask if someone has already done something like that, because the GPU is really the thing that can speed up video analysis.
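As an aside for readers: RoboRealm does not expose a CUDA path (see STeven's reply below), but a minimal standalone C++ sketch can show what GPU offload of a per-frame step looks like. It assumes an OpenCV build with CUDA support (the cudaimgproc module) and is only an illustration, not anything RoboRealm ships.

    // Minimal sketch: offload one per-frame step (color conversion)
    // to the GPU with OpenCV's CUDA module.
    #include <opencv2/opencv.hpp>
    #include <opencv2/cudaimgproc.hpp>

    int main() {
        cv::VideoCapture cap(0);             // default camera
        cv::Mat frame, gray;
        cv::cuda::GpuMat gpuFrame, gpuGray;  // device-side buffers

        while (cap.read(frame)) {
            gpuFrame.upload(frame);                          // host -> GPU
            cv::cuda::cvtColor(gpuFrame, gpuGray, cv::COLOR_BGR2GRAY);
            gpuGray.download(gray);                          // GPU -> host
            cv::imshow("gray", gray);
            if (cv::waitKey(1) == 27) break;                 // Esc quits
        }
        return 0;
    }

The upload/download copies are the catch: for cheap operations the transfer over the bus can cost more than the GPU saves, which is part of why a CPU-first design is reasonable.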
Carl from Your Country [1447 posts]
10 years ago
Jean,
Yes, there is, but it requires rewriting most of how the modules work. For now, we have focused on utilizing more CPUs until the GPU market becomes more accessible in a smaller form factor. We have been looking at devices like the Gizmo, which provide GPU-type functionality in a small, inexpensive package, but we have not yet released anything for those architectures.
Is there a particular module that you want to speed up?
STeven.
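The multi-CPU approach STeven describes can be illustrated with a small standalone sketch (again, not RoboRealm code): split a frame into horizontal strips and hand each strip to its own thread. The thresholding here is just a stand-in for real per-pixel module work.

    #include <opencv2/opencv.hpp>
    #include <thread>
    #include <vector>

    // Each cv::Mat strip shares the parent image's buffer, so the
    // threads write their results in place without copying.
    void thresholdStrip(cv::Mat strip) {
        cv::threshold(strip, strip, 128, 255, cv::THRESH_BINARY);
    }

    int main() {
        cv::Mat img = cv::imread("frame.png", cv::IMREAD_GRAYSCALE);
        if (img.empty()) return 1;

        unsigned n = std::thread::hardware_concurrency();  // CPU count
        if (n == 0) n = 1;

        std::vector<std::thread> workers;
        int rowsPer = img.rows / n;
        for (unsigned i = 0; i < n; ++i) {
            int y0 = i * rowsPer;
            int y1 = (i + 1 == n) ? img.rows : y0 + rowsPer;
            workers.emplace_back(thresholdStrip, img.rowRange(y0, y1));
        }
        for (auto& t : workers) t.join();

        cv::imwrite("frame_out.png", img);
        return 0;
    }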
jean phil from Canada [3 posts]
10 years ago
Multiple object recognition, and face recognition too if you have already built it. My project is a Johnny Five robot, automated and able to interact by voice.
Carl from Your Country [1447 posts]
10 years ago
The object recognition module does utilize multiple CPUs if you have them, so you will notice a speedup if you add more. We currently have face detection but not face recognition; that module is still in the works ... You can, however, use the Haar method in the object recognition module to identify specific faces. This is not ideal, since rotation is not handled well by that method unless you train it on multiple angles of your face. Again, that method will use more than one CPU if available.
Thanks,
STeven.
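For reference, the Haar approach STeven describes looks roughly like the following outside RoboRealm, using OpenCV's CascadeClassifier with the stock frontal-face cascade that ships with OpenCV. This is a generic detector sketch, not RoboRealm's module; training on specific, multi-angle images of one face is a separate step.

    #include <opencv2/opencv.hpp>
    #include <vector>

    int main() {
        // Stock frontal-face Haar cascade shipped with OpenCV; as noted
        // above, it tolerates little rotation out of the box.
        cv::CascadeClassifier face;
        if (!face.load("haarcascade_frontalface_default.xml")) return 1;

        cv::Mat img = cv::imread("me.png");
        if (img.empty()) return 1;

        cv::Mat gray;
        cv::cvtColor(img, gray, cv::COLOR_BGR2GRAY);
        cv::equalizeHist(gray, gray);  // evens out lighting for detection

        std::vector<cv::Rect> faces;
        face.detectMultiScale(gray, faces, 1.1, 3);  // scale step, min neighbors

        for (const auto& r : faces)
            cv::rectangle(img, r, cv::Scalar(0, 255, 0), 2);
        cv::imwrite("detected.png", img);
        return 0;
    }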