Life is certainly interesting. Back in 2001 I told myself I would never use C++ again, since Java is so much nicer and C/C++ is so error-prone.
Now it happened that I found a paper about CUDA, which basically allows me to run calculations on Nvidia 8800 GPUs. The drawback: you have to write in a mixed style of C/C++.
Well, at least I'll learn a lot, but why should I use the GPU in the first place? I use a Mac Pro with 2x2 CPU cores or an IBM X60 with 1x2 cores for my daily work, and both have a decent amount of memory (8 GB and 2 GB of RAM), so I should be set for everyday life. But a GT8800 has one GPU with 128 stream processors, which basically means 128 cores doing work for you. You can also combine several of these cards; in my case I could fit up to 4 of them in my Mac, which would give me 512 cores.
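To give an idea of what that mixed C/C++ style looks like, here is a minimal, illustrative sketch (not from the paper, just the standard vector-add example): each GPU thread computes one element, and the hardware spreads the threads across the card's stream processors.

    // Minimal CUDA sketch: add two vectors, one element per GPU thread.
    #include <stdio.h>
    #include <cuda_runtime.h>

    __global__ void addVectors(const float *a, const float *b, float *c, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
        if (i < n)
            c[i] = a[i] + b[i];
    }

    int main(void)
    {
        const int n = 1 << 20;
        size_t bytes = n * sizeof(float);

        // Host buffers
        float *hA = (float *)malloc(bytes);
        float *hB = (float *)malloc(bytes);
        float *hC = (float *)malloc(bytes);
        for (int i = 0; i < n; ++i) { hA[i] = (float)i; hB[i] = 2.0f * i; }

        // Device buffers
        float *dA, *dB, *dC;
        cudaMalloc((void **)&dA, bytes);
        cudaMalloc((void **)&dB, bytes);
        cudaMalloc((void **)&dC, bytes);
        cudaMemcpy(dA, hA, bytes, cudaMemcpyHostToDevice);
        cudaMemcpy(dB, hB, bytes, cudaMemcpyHostToDevice);

        // Launch enough blocks of 256 threads to cover all n elements;
        // the GPU schedules the blocks across its stream processors.
        int threads = 256;
        int blocks = (n + threads - 1) / threads;
        addVectors<<<blocks, threads>>>(dA, dB, dC, n);

        cudaMemcpy(hC, dC, bytes, cudaMemcpyDeviceToHost);
        printf("c[42] = %f\n", hC[42]);

        cudaFree(dA); cudaFree(dB); cudaFree(dC);
        free(hA); free(hB); free(hC);
        return 0;
    }

The <<<blocks, threads>>> launch syntax is exactly the non-standard extension that makes it "mixed" C/C++; nvcc separates host and device code and compiles each part.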
So now I'm sitting here, starting to learn good old C again, and I've already registered a new project on SourceForge; you need to get references somehow...
The name: jacuda.
What do I want to do? Provide a Java/Python/Groovy/C API to run scientific calculations on CUDA-enabled graphics cards, and hopefully speed up some calculations at work with it. For example, comparing 2k vs. 2k bins currently takes my Java program 5 hours, and maybe this makes it much, much faster. The final goal is to write a CUDA implementation of some CDK algorithms, since I get the feeling that many people actually need this feature. Some algorithms in the CDK are just too slow compared to other programs.
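Nothing of jacuda exists yet, so the following is only a sketch of one obvious way to wire a Java API to a CUDA kernel: compile the kernel plus a JNI entry point into a shared library with nvcc and load it from Java. The package and class name org.jacuda.Kernels are made up for illustration, and the kernel is the same vector-add as above.

    // Sketch: exposing a CUDA kernel to Java through JNI (hypothetical names).
    #include <jni.h>
    #include <cuda_runtime.h>

    __global__ void addVectors(const float *a, const float *b, float *c, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            c[i] = a[i] + b[i];
    }

    // Called from Java as: Kernels.addVectors(float[] a, float[] b, float[] c)
    extern "C" JNIEXPORT void JNICALL
    Java_org_jacuda_Kernels_addVectors(JNIEnv *env, jclass,
                                       jfloatArray ja, jfloatArray jb, jfloatArray jc)
    {
        jsize n = env->GetArrayLength(ja);
        size_t bytes = n * sizeof(float);

        // Get (copies of) the Java arrays on the host side.
        jfloat *ha = env->GetFloatArrayElements(ja, NULL);
        jfloat *hb = env->GetFloatArrayElements(jb, NULL);
        jfloat *hc = env->GetFloatArrayElements(jc, NULL);

        float *da, *db, *dc;
        cudaMalloc((void **)&da, bytes);
        cudaMalloc((void **)&db, bytes);
        cudaMalloc((void **)&dc, bytes);
        cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
        cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

        int threads = 256;
        int blocks = (n + threads - 1) / threads;
        addVectors<<<blocks, threads>>>(da, db, dc, n);
        cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);

        cudaFree(da); cudaFree(db); cudaFree(dc);
        env->ReleaseFloatArrayElements(ja, ha, JNI_ABORT); // inputs: nothing to copy back
        env->ReleaseFloatArrayElements(jb, hb, JNI_ABORT);
        env->ReleaseFloatArrayElements(jc, hc, 0);         // copy result back to Java
    }

On the Java side this would be matched by a class org.jacuda.Kernels with a native method addVectors(float[], float[], float[]) and a System.loadLibrary call; the Python and Groovy layers could then sit on top of the same native library.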
Saturday, June 07, 2008