python - Starting OpenCL - How to bring a GPU card to its maximum?


I am new to GPU computing and need some advice, since it seems OpenCL is becoming the new industry standard to move to, instead of CUDA.

So far I have used Python, and multiprocessing is a fantastic, simple tool. Now I want to expand my processing capacity with GPU power. At the moment I have one function that needs to be calculated: I call the function with some numbers and get the result after about 10 seconds.

How can I get started with OpenCL, and what is the best tool for programming OpenCL under Python?

Is it possible to use a decorator that sends a function to the GPU card and calculates it at light speed? If that is possible, I would like to send the function to the GPU several thousand times for parallel processing at 100% of its calculation power.

How can I do that, and is OpenCL the right tool for doing it?

Any advice or demo code is appreciated.

Regards, Frank

The most popular method of using OpenCL from Python is PyOpenCL. PyOpenCL is a full wrapper around the OpenCL API: it provides every piece of OpenCL functionality within Python, along with some nice Pythonic simplifications. It's not quite as easy as adding a decorator to a function, but it's still pretty straightforward to get up and running. There is a set of documentation with examples at the above link, and there is also a set of examples in the hands-on OpenCL tutorial from the University of Bristol.
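To give a feel for what PyOpenCL code looks like, here is a minimal sketch that adds two vectors on whatever OpenCL device is available. It assumes `pyopencl` is installed (`pip install pyopencl`) together with a working OpenCL driver; to keep the sketch runnable anywhere, it falls back to plain NumPy when `pyopencl` is missing:

```python
import numpy as np

# Optional GPU path: PyOpenCL may not be installed everywhere,
# so fall back to NumPy when it is absent.
try:
    import pyopencl as cl
    HAVE_CL = True
except ImportError:
    HAVE_CL = False

# The OpenCL C kernel: each work-item adds one pair of elements.
KERNEL = """
__kernel void vadd(__global const float *a,
                   __global const float *b,
                   __global float *out)
{
    int i = get_global_id(0);
    out[i] = a[i] + b[i];
}
"""

def vector_add(a, b):
    """Add two float32 arrays, on an OpenCL device if one is available."""
    if not HAVE_CL:
        return a + b  # CPU fallback so the sketch runs without a GPU

    ctx = cl.create_some_context(interactive=False)
    queue = cl.CommandQueue(ctx)
    mf = cl.mem_flags

    # Copy the inputs to device buffers and allocate an output buffer.
    a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
    b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
    out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

    # Build the kernel and launch one work-item per element.
    prog = cl.Program(ctx, KERNEL).build()
    prog.vadd(queue, a.shape, None, a_buf, b_buf, out_buf)

    out = np.empty_like(a)
    cl.enqueue_copy(queue, out, out_buf)
    return out

if __name__ == "__main__":
    a = np.arange(8, dtype=np.float32)
    b = np.ones(8, dtype=np.float32)
    print(vector_add(a, b))
```

The structure (context, queue, buffers, program, kernel launch, copy-back) is the same for any OpenCL program; for a 10-second function like yours, the payoff comes when the kernel does enough work to amortize the host-device transfers.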

There have been a couple of attempts to simplify the Python+OpenCL experience further by providing the single-source approach you are after. CLyther is one such attempt, although it doesn't seem to be active at the moment and I don't believe it ever reached a release. A more recent attempt is Urutu, which seems to be under development and shows promise (see their poster at GTC). I haven't used either of these yet, so I cannot vouch for them personally.
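The decorator-based, single-source idea those projects aim for can be illustrated with plain `multiprocessing`, which you already use. This is a CPU-only sketch, and the decorator name `parallel_map` is made up for illustration; the GPU-targeting libraries aim for the same programming model with an OpenCL device as the backend instead of a process pool:

```python
from multiprocessing import Pool

def parallel_map(workers=2):
    # Hypothetical decorator (illustration only): turns a scalar function
    # into one that maps over a list of inputs across worker processes.
    def decorate(fn):
        def run_many(inputs):
            with Pool(workers) as pool:
                return pool.map(fn, inputs)
        return run_many
    return decorate

def heavy(x):
    # Stand-in for an expensive numeric function.
    return x * x

# Applied explicitly (not with @-syntax) so the original module-level
# function stays picklable for multiprocessing.
heavy_many = parallel_map(workers=2)(heavy)

if __name__ == "__main__":
    print(heavy_many([1, 2, 3, 4]))  # [1, 4, 9, 16]
```

This gives the ergonomics you describe in the question; the GPU versions replace the process pool with kernel launches, which is where the engineering difficulty (and the immaturity of those projects) lies.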

To answer your final question: yes, if you have a parallel workload and are looking for portable GPU acceleration, OpenCL is the right tool for you.

