Message boards : Wish list : GPU Comparison
I wish there were a statistic with a GPU comparison. It wouldn't be too hard to do.
ID: 13775
You don't need all that.
ID: 13781
Yes, but I wonder about the effect of main memory and the CPU. Right now I'm building one computer with 3 GPUs.
ID: 13842
Stay away from the GT9800s!
ID: 13859
I'm not really sure where the problem is. On the SETI CUDA forums I have seen many posts from people with errors. CUDA 3.0 is beta. I also suspect this "home" hardware is not 100% prepared to run 24/7. But if you buy a server GPU, the price rises to a minimum of €300 or higher.
ID: 13883
When you ask for advice, perhaps you should listen to it!
ID: 13912
"When you ask for advice, perhaps you should listen to it!"

Right, but before making a statement you should have all the information. First, I already bought the cards. Second, I have a meeting next month with an expert from a company about using this technology in 30 computers for business; 30 x 300 is a lot more than 30 x 100, and after talking with him I think I'll have a professional point of view. Third, there are a lot of posts here and at SETI that say different things, and I'm getting confused by all the opinions. I prefer to test the cheap option before the expensive one. Just playing with science.
ID: 13926
WRT buying cards that can participate at GPUGrid, I would much rather you chose one GTX275 than three GT9800 cards!
ID: 13928
"The GT9800 will not work well with all tasks and you may get more failures than successes."

I didn't know this.

"What is the point in retrospectively asking IT experts here about what you should do, if you have already spent the money?"

The money I have spent so far is just for testing. Actually he is not an IT expert, he is a CUDA expert; I'm an IT expert. What I want from him is to know why there are so many failures on these cards. Is it a software or a hardware failure? Does it really depend on the model?

"Third reason. Even if the GT9800 managed to get through all the tasks, and they won't, three cards would do less work than one GTX275. In fact it would take 4 GT9800 cards to come close to the performance of one GTX275."

Judging by $/GFLOPS (I don't care about credit), this is not exactly true (see the table below): GTX275 = 0.21 $/GFLOPS, GeForce 9800 GT = 0.12 $/GFLOPS.

"Fourth reason. If you have 3 cards, there is more chance that one will fail."

I doubt this statement. If you have 3 cards it is much less probable that all 3 of them break (that is a statistical fact). Here I wonder whether a task is shared between the 3 cards, but as far as I have seen it is not like that.

"Sixth, and most important reason. The experts here are saying what will work here."

For sure, if you want a high-end card that one is the best. But the post I followed was this one, which analyses the $/GFlop. I ran a few numbers (source: Wikipedia FLOPS figures and prices):

Model ------------------> $ ------> GFLOPS ----> $/GFLOPS
GeForce 9800 GT --------> 60 -----> 504 -------> 0.12
GeForce GTS 250 --------> 140 ----> 705 -------> 0.20
GeForce GTX 260 --------> 150 ----> 715 -------> 0.21
GeForce GTX 275 --------> 210 ----> 1010.88 ---> 0.21
GeForce GTX 295 --------> 470 ----> 1788 ------> 0.26

I still think all the information scattered around the forums is too messy and quite complicated to read through. I think it would be much better for a few people to put together a GPUGrid wiki and organize all the information there; too much time is being wasted (for me this matters much more than the cost of electricity and cards).
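To make the two calculations in this post concrete, here is a minimal Python sketch. It recomputes $/GFLOPS from the figures in the table above, and it contrasts the two reliability claims being argued: with three cards it is more likely that at least one fails, but less likely that all three fail at once. The 10% per-card failure probability is purely a hypothetical number for illustration, not measured data.

```python
# Price per GFLOPS from the poster's table, plus the two ways of reading
# "more chance that one will fail" with three cards in one box.

cards = {
    # name: (price in $, peak single-precision GFLOPS), figures from the table above
    "GeForce 9800 GT": (60, 504),
    "GeForce GTS 250": (140, 705),
    "GeForce GTX 260": (150, 715),
    "GeForce GTX 275": (210, 1010.88),
    "GeForce GTX 295": (470, 1788),
}

for name, (price, gflops) in cards.items():
    print(f"{name}: {price / gflops:.2f} $/GFLOPS")

# Reliability: hypothetical per-card failure probability, for illustration only.
p_fail = 0.10
p_any_of_three = 1 - (1 - p_fail) ** 3   # at least one of three cards fails
p_all_of_three = p_fail ** 3             # all three cards fail at once
print(f"one card fails:        {p_fail:.3f}")
print(f"at least one of three: {p_any_of_three:.3f}")
print(f"all three at once:     {p_all_of_three:.3f}")
```

The output shows why both posters can be "right": the chance that some card in a three-card box fails is higher than for a single card, even though the chance that all three die together is far lower.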
ID: 13942
In practice,
ID: 13950
"In practice,"

If I had known this before, I would have saved money; I spent many hours reading posts trying to find this info :o( :o( :o( Thanks GDF... This confirms one of my principles: "you always have to ask the one who really knows the answer".
ID: 13952
A GTX260 216sp with a 55nm GT200 Revision B core will work perfectly too.
ID: 13959
Thanks very much, after all that I saw the light.

"Some have a GT200 core and some are 65nm. Don't get one of those."

I can't even find that information on NVIDIA's web pages, nor on the box, nor on the card I have in my hands (I'm not going to open it and lose the warranty). I got sick of it. I'll get one GTX275 and forget about this. I recommend that people who get into the same situation make this decision first (just watch your power supply; the card consumes over 200 W). I also recommend the admins completely rewrite or delete the article on the main page. I just ended up with a terrible headache!!
ID: 13961
I agree that GPU information is hard to come by. It can be found, but it takes a lot of individual effort and learning. Not good for newcomers to the project.
ID: 13965
I agree with you, but I think the post should be easy enough that anyone can understand it without too much effort.
ID: 13966
Well, it's there, and it's simple. Hopefully it is detailed enough to act as a guide.
ID: 13968
"In practice,"

Does NVIDIA make the source code for the FFT routines available? I'm thinking of learning enough CUDA to have a try at fixing it. I could then test it on my 9800 GT.

Also, how practical would it be to make a CUDA software build for the recent NVIDIA cards with the CUDA 2.3 SDK, and a separate software build for the G92 cards with an older SDK that still supported the G92?

I am still NOT able to do hardware work on my computers; the company I have found that will do it for me (HP) does not offer computers with anything higher than the GTX260. In the meantime, would it be practical to offer two separate lists of tasks, one of which will run on a G92 and one for which a G92 is not reliable enough?
ID: 14146
"The GT9800 will not work well with all tasks and you may get more failures than successes."

I'm not sure that's true for all 9800s. My 9800 GT appears to succeed for most of the workunits it gets. However, I don't know if the server is sending it only the type of workunits it can handle. I believe it uses a G92b instead of a G92.

However, I've seen some articles saying that the GT240 is now more cost-effective for GPUGRID than the GTX275 or the 9800 GT, so I'm thinking of ordering some of them. For example, two GT240s can run on about as much power as just one 9800 GT, so if there are enough card slots I can probably double my GPU computing power without replacing the power supply. If you have fewer empty slots but plenty of power supply and cooling capability, the GTX275 is probably still the better choice, though. It's also a better choice if GPUGRID is planning to start requiring cards with compute capability 1.3, but I don't remember seeing that mentioned. Use for something other than GPUGRID may require choosing cards with more memory, though, even though these cards are less cost-effective for GPUGRID.

Another thing - the GT240 is more recent than the 9800 GT, so NVIDIA is likely to continue offering good support for it longer.
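For what it's worth, here is a small Python sketch of the cost/power trade-off described above. The prices, peak GFLOPS and board-power numbers are rough circa-2010 figures filled in for illustration (they vary by vendor and board revision), so treat the output as a way of framing the comparison rather than as hard data.

```python
# Rank the cards discussed in this post by GFLOPS per dollar and per watt.
# All figures below are approximate, era-typical values; actual boards differ.

cards = [
    # (name, approx. price $, approx. peak GFLOPS, approx. board power W)
    ("GeForce 9800 GT", 60, 504, 105),
    ("GeForce GT 240",  90, 385, 69),
    ("GeForce GTX 275", 210, 1011, 219),
]

print(f"{'card':<18}{'GFLOPS/$':>10}{'GFLOPS/W':>10}")
for name, price, gflops, watts in cards:
    print(f"{name:<18}{gflops / price:>10.1f}{gflops / watts:>10.1f}")

# The suggestion above: two GT 240s in roughly the power envelope of one
# 9800 GT. On these paper figures the pair draws somewhat more power but
# delivers about 50% more aggregate peak GFLOPS, if the slots are free.
pair_gflops = 2 * 385
pair_watts = 2 * 69
print(f"2x GT 240: {pair_gflops} GFLOPS at ~{pair_watts} W "
      f"vs one 9800 GT: 504 GFLOPS at ~105 W")
```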
ID: 15452
Generally I would say anything that is not a G200b or above could have reliability issues, and I would recommend getting a newer card over an older card every time. However, it is very difficult to say don’t use this card or that card, because there are so many versions, and some seem to work while others just don’t. I expect there is a big difference between a G92 and a G92b. The G92b is a revision, and no doubt overcame issues with the previous version.
ID: 15465
The GT240 can't be used on Milkyway; GT240s are CC1.2 and Milkyway requires CC1.3 cards (GTX260 and up). But that's their loss, and a naive move in my opinion; they are much closer to CC1.3 than CC1.1 cards (GT240s use a GT215, and benefit from similar improvement factors).

Given that there will only be between 5000 and 8000 Fermi cards released, and perhaps only a few will end up in the hands of GPUGrid crunchers, I don't think GPUGrid can afford to move away from CC1.2 cards any time soon, and if they do they are likely to move away from CC1.1 first. Although there might be new mid-range NVIDIA cards in the summer or autumn, they may be anything from CC1.2 to some new, unknown compute capability rating.

I've read enough on the Milkyway site to find that their science cannot get useful results without doing large parts of the calculations in double precision, so they have no real choice but to avoid cards that can't handle double precision. As far as I can tell, it would be easier for me to afford a GTX 275 than to get it installed. I'm looking into higher memory, but mainly in order to have more chance of participating in future GPU BOINC projects closer to the types of medical research I'm most interested in.
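Since the CC1.2 vs CC1.3 distinction keeps coming up, here is a small Python sketch that spells it out: hardware double precision (which Milkyway's science needs) only exists from compute capability 1.3 upward, so the GT200-based cards qualify while the GT 240 (CC1.2) and the G92-based cards (CC1.1) do not. The card-to-capability mapping uses NVIDIA's published ratings for these GPUs.

```python
# Which of the cards discussed in this thread have hardware double precision?
# Double precision requires compute capability 1.3 or higher (GT200 class).

COMPUTE_CAPABILITY = {
    "GeForce 9800 GT": (1, 1),   # G92 / G92b
    "GeForce GTS 250": (1, 1),   # G92b
    "GeForce GT 240":  (1, 2),   # GT215
    "GeForce GTX 260": (1, 3),   # GT200
    "GeForce GTX 275": (1, 3),   # GT200b
    "GeForce GTX 295": (1, 3),   # 2x GT200b
}

def supports_double_precision(cc):
    """Hardware double precision first appears at compute capability 1.3."""
    return cc >= (1, 3)

for card, cc in COMPUTE_CAPABILITY.items():
    ok = "yes" if supports_double_precision(cc) else "no"
    print(f"{card}: CC {cc[0]}.{cc[1]}, double precision: {ok}")
```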
ID: 15479